
Search in the Catalogues and Directories

Page: 1 2 3 4 5...9
Hits 1 – 20 of 167

1. Europarl Direct Translationese Dataset ... (BASE)
2. Europarl Direct Translationese Dataset ... (BASE)
3. Europarl Direct Translationese Dataset ... (BASE)
4. Integrating Unsupervised Data Generation into Self-Supervised Neural Machine Translation for Low-Resource Languages ... (BASE)
5. Comparing Feature-Engineering and Feature-Learning Approaches for Multilingual Translationese Classification ... (BASE)
6. Investigating the Helpfulness of Word-Level Quality Estimation for Post-Editing Machine Translation Output ... (BASE)
7. Multi-Head Highly Parallelized LSTM Decoder for Neural Machine Translation ... (BASE)
Abstract: One of the reasons Transformer translation models are popular is that self-attention networks for context modelling can be easily parallelized at sequence level. However, the computational complexity of a self-attention network is $O(n^2)$, increasing quadratically with sequence length. By contrast, the complexity of LSTM-based approaches is only $O(n)$. In practice, however, LSTMs are much slower to train than self-attention networks because they cannot be parallelized at sequence level: to model context, the current LSTM state relies on the full LSTM computation of the preceding state, which must be repeated n times for a sequence of length n. The linear transformations involved in the LSTM gate and state computations are the major cost factor. To enable sequence-level parallelization of LSTMs, we approximate full LSTM context modelling by computing hidden states and gates with the current input and a simple bag-of-words representation ...
Paper: https://www.aclanthology.org/2021.acl-long.23
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/fcc7-e373
https://underline.io/lecture/25374-multi-head-highly-parallelized-lstm-decoder-for-neural-machine-translation
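The core idea in hit 7's abstract can be sketched as follows: if the gates and candidate cell state depend only on the current input and a bag-of-words summary of the preceding inputs (here taken as their running mean, via a cumulative sum), every position's gate computation can be done in one batched matrix product instead of n sequential LSTM steps; only a cheap elementwise cell recurrence remains sequential. This is a minimal NumPy sketch under those assumptions — the function and weight names (`bow_lstm_decoder`, `Wi`, `Wf`, `Wo`, `Wc`) are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bow_lstm_decoder(X, Wi, Wf, Wo, Wc):
    """Approximate LSTM layer whose gates and candidate states depend
    only on the current input x_t and a bag-of-words (mean) of the
    preceding inputs, so all positions can be computed in parallel.

    X:  (n, d) sequence of input vectors.
    W*: (2d, d) weight matrices acting on [x_t ; bow_{<t}].
    Returns: (n, d) hidden states.
    """
    n, d = X.shape
    # Bag-of-words context: mean of all *previous* inputs (zeros at t=0).
    # A cumulative sum makes this O(n) and fully vectorized.
    csum = np.cumsum(X, axis=0)
    prev = np.vstack([np.zeros((1, d)), csum[:-1]])
    counts = np.maximum(np.arange(n), 1).reshape(-1, 1)
    bow = prev / counts
    Z = np.hstack([X, bow])      # (n, 2d): gate inputs for ALL steps at once
    i = sigmoid(Z @ Wi)          # input gates, all positions in parallel
    f = sigmoid(Z @ Wf)          # forget gates
    o = sigmoid(Z @ Wo)          # output gates
    c_hat = np.tanh(Z @ Wc)      # candidate cell states
    # The cell recurrence c_t = f_t * c_{t-1} + i_t * c_hat_t stays
    # sequential, but involves only cheap elementwise operations.
    c = np.zeros((n, d))
    c[0] = i[0] * c_hat[0]
    for t in range(1, n):
        c[t] = f[t] * c[t - 1] + i[t] * c_hat[t]
    return o * np.tanh(c)
```

Note the trade-off this illustrates: the expensive linear transformations (the `Z @ W*` products, which the abstract identifies as the major cost factor) are batched over the whole sequence, at the price of replacing exact recurrent context with a bag-of-words approximation.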
8. Comparing Feature-Engineering and Feature-Learning Approaches for Multilingual Translationese Classification ... (BASE)
9. Modeling Task-Aware MIMO Cardinality for Efficient Multilingual Neural Machine Translation ... (BASE)
10. A Bidirectional Transformer Based Alignment Model for Unsupervised Word Alignment ... (BASE)
11. Automatic classification of human translation and machine translation: a study from the perspective of lexical diversity (BASE)
Fu, Yingxue; Nederhof, Mark Jan. - Linköping University Electronic Press, 2021
12. Transformer-based NMT: modeling, training and implementation (BASE)
Xu, Hongfei. - Saarländische Universitäts- und Landesbibliothek, 2021
13. The European Language Technology Landscape in 2020: Language-Centric and Human-Centric AI for Cross-Cultural Communication in Multilingual Europe (BASE)
In: Language Resources and Evaluation Conference, ELDA/ELRA, May 2020, Marseille, France; https://hal.archives-ouvertes.fr/hal-02892154; https://lrec2020.lrec-conf.org/en/ (2020)
14. The European Language Technology Landscape in 2020: Language-Centric and Human-Centric AI for Cross-Cultural Communication in Multilingual Europe ... (BASE)
15. The European Language Technology Landscape in 2020: Language-Centric and Human-Centric AI for Cross-Cultural Communication in Multilingual Europe ... (BASE)
16. The European Language Technology Landscape in 2020: Language-Centric and Human-Centric AI for Cross-Cultural Communication in Multilingual Europe ... (BASE)
17. Linguistically inspired morphological inflection with a sequence to sequence model ... (BASE)
18. Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers ... (BASE)
19. Language service provision in the 21st century: challenges, opportunities and educational perspectives for translation studies (BASE)
In: Bologna Process beyond 2020: Fundamental values of the EHEA, pp. 297-303 (2020); ISBN: 9788869234934
20. Deep interactive text prediction and quality estimation in translation interfaces (BASE)
Hokamp, Christopher M. - PhD thesis, Dublin City University, School of Computing, 2018


Hits by source type: Catalogues: 6 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 161
© 2013 - 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings